
Conversation


HaileyStorm commented Jan 5, 2026

MOV vs DUP Bounty — Proof Report

This report explains what was broken, what is fixed now, why the fixes are correct, and why MOV can be faster in some cases yet slower in others. It also documents why a universally “always‑faster and always‑correct” MOV is not achievable in this runtime, using both the f3 repro (new in this PR) and the Payor repro.

1) Problem recap

The bounty requires that MOV and DUP encodings produce the same output under -C1, and that MOV finishes under 50k interactions on the provided program. In the original state, MOV differed from DUP because the runtime memoized MOV‑produced lambdas and accidentally shared mutable binder slots across uses. A second failure mode appeared when a MOV binder was used 3+ times, violating linearity without explicit duplication.

Two minimal repros make the issues concrete:

  • Payor repro: reduction can erase branch structure and force multiple independent uses into a single residual computation. If a MOV‑produced lambda is memoized, those uses share a mutable binder slot, and later applications can overwrite earlier bindings. This changes semantics.
  • f3 repro: a MOV binder is effectively used three times through nested duplication. Without explicit duplication, the runtime reuses a single MOV slot multiple times, violating linearity and creating aliasing.

2) Root cause

In HVM4’s C runtime, application performs in‑place substitution through binder slots / SUB cells (via heap_subst_var). If a MOV‑bound value reduces to a LAM and we memoize it, multiple uses share the same consumable binder slot. That lets later applications overwrite earlier bindings and changes the program’s result. This is the root cause of the Payor mismatch. Separately, MOV binders used 3+ times must be made explicit (duplication) to preserve linearity; otherwise a single MOV binder is consumed multiple times.
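To make this concrete, here is a minimal, standalone C model of the failure (illustrative only; these are not the hvm4 runtime's types or functions). Two applications of the same memoized lambda write their argument into the same binder slot, so the later binding clobbers the earlier one and anything that still reads through the slot sees the wrong value:

#include <stdio.h>

/* Stand-in for a binder/SUB cell: one mutable slot that an application writes into. */
typedef struct { int value; } BinderSlot;

/* "Apply" the shared lambda: bind the argument by writing it into the slot. */
static void apply(BinderSlot *slot, int arg) {
  slot->value = arg;
}

int main(void) {
  BinderSlot shared = {0};
  apply(&shared, 1);   /* first use binds 1            */
  apply(&shared, 2);   /* second use overwrites with 2 */
  /* A residual term that still reads the binder through the slot now sees 2
   * even where the first binding (1) was expected -- the Payor-style mismatch. */
  printf("binder slot after both uses: %d\n", shared.value);
  return 0;
}

SAFE_MOV (section 3.1) avoids this by giving each use a fresh wrapper instead of one shared slot.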

3) Fixes present in the current code

The current code fixes both failure modes without inventing new semantics. These are the changes intended for the PR:

3.1 SAFE_MOV: disable MOV‑LAM memoization by default

Files: hvm4_static/clang/wnf/mov_lam.c, hvm4_static/clang/hvm4.c, hvm4_static/clang/main.c

What changed: when a MOV value reduces to a LAM, the runtime does not memoize the MOV expr slot (no heap_subst_var on the MOV slot). Each use gets a fresh lambda wrapper instead of sharing a mutable binder slot.

Why: prevents multiple applications from clobbering one shared binder cell.

Scope note: SAFE_MOV only affects MOV‑LAM memoization. MOV‑NOD / MOV‑SUP / MOV‑RED still memoize normally.

Impact: correctness for Payor; small overhead in lambda‑heavy MOV paths because each use rebuilds the lambda wrapper.

Toggle: set HVM4_SAFE_MOV=0 to enable legacy memoization. Default is safe (memoization off for MOV‑LAM).
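For illustration, the kind of startup check the HVM4_SAFE_MOV override implies could look like the sketch below; the variable and function names here are placeholders, not the actual symbols in hvm4.c or main.c:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Placeholder flag: 1 = safe default (MOV-LAM memoization disabled),
 * 0 = legacy behavior (memoize MOV-LAM results). */
static int safe_mov = 1;

/* Hypothetical initializer: honor HVM4_SAFE_MOV=0 from the environment. */
static void init_safe_mov(void) {
  const char *env = getenv("HVM4_SAFE_MOV");
  if (env != NULL && strcmp(env, "0") == 0) {
    safe_mov = 0;
  }
}

int main(void) {
  init_safe_mov();
  printf("safe_mov = %d\n", safe_mov);
  return 0;
}

With such a flag in place, the HVM4_SAFE_MOV=0 run shown in section 7 takes the legacy memoizing path, which is exactly what reproduces the Payor mismatch.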

3.2 Auto‑dup for MOV uses > 2 (parser‑level)

File: hvm4_static/clang/parse/term/mov.c

What changed: if a MOV binder is used more than twice, parse_auto_dup(..., BJM, 0) rewrites the body to make extra uses explicit. The MOV value is consumed once and the extra uses are routed through DUPs.

Why BJM is correct here: at parse time, MOV‑bound variables are represented as BJM (see hvm4_static/clang/parse/term/var.c, MOV binder path). parse_auto_dup operates on the parser’s de‑Bruijn representation (BJV/BJ0/BJ1/BJM), not runtime GOTs.

Why: preserves linearity under nested duplication and fixes the f3 repro.

Impact: correctness for 3+ uses; small compile‑time rewrite and extra DUPs at runtime. This is required for semantics and is not optional.
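As a rough, self-contained illustration of the decision rule only (this is not the hvm4 parser; the real BJM handling and parse_auto_dup rewrite are more involved), the threshold amounts to counting occurrences of the MOV binder in its body and rewriting only when there are more than two:

#include <stdio.h>

/* Toy term: a variable occurrence (var >= 0) or an interior node with up to
 * three children. Purely illustrative; not the parser's BJV/BJ0/BJ1/BJM AST. */
typedef struct Term {
  int var;                   /* -1 for a non-variable node */
  const struct Term *kid[3]; /* unused slots are NULL */
} Term;

static int count_uses(const Term *t, int var) {
  if (t == NULL) return 0;
  int n = (t->var == var) ? 1 : 0;
  for (int i = 0; i < 3; i++) n += count_uses(t->kid[i], var);
  return n;
}

int main(void) {
  /* Three occurrences of binder 0 under one node, mimicking a MOV binder used 3x. */
  Term u0 = {0, {NULL, NULL, NULL}};
  Term u1 = {0, {NULL, NULL, NULL}};
  Term u2 = {0, {NULL, NULL, NULL}};
  Term body = {-1, {&u0, &u1, &u2}};

  int uses = count_uses(&body, 0);
  printf("%d uses -> %s\n", uses,
         uses > 2 ? "auto-dup rewrite (extra uses routed through DUPs)"
                  : "keep plain MOV");
  return 0;
}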

3.3 Minimal MOV/DUP dispatch (no special MOV‑under‑DUP fast paths)

File: hvm4_static/clang/wnf/_.c

What changed: WNF dispatch stays aligned with upstream; no custom MOV‑under‑DUP or DUP/MOV fast paths. This avoids adding administrative interactions that were inflating counts without correctness benefit.

3.4 Safe‑atom MOV fast paths

Files: hvm4_static/clang/wnf/mov_nod.c, hvm4_static/clang/wnf/mov_sup.c

What changed: when all fields are immutable atoms (e.g., NUM, NAM, ERA, quoted vars), skip GOT allocation and directly reconstruct the node. In mov_nod, this is guarded by ari <= 16 to keep a small stack buffer.

Why: avoid needless GOT indirections for trivial data.

Impact: small perf win on atom‑heavy values; no semantic change.
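For intuition, the guard amounts to a check like the one below (a sketch with stand-in types and a toy atom predicate; the real mov_nod.c / mov_sup.c operate on the runtime's packed term words and tags):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define ATOM_FAST_MAX_ARI 16   /* keeps the field copy in a small stack buffer */

typedef uint64_t Term;         /* stand-in for the runtime's packed term word */

/* Stand-in for the "immutable atom" test (NUM, NAM, ERA, quoted var, ...). */
static bool is_atom(Term t) {
  return (t & 1u) == 0;        /* arbitrary toy encoding for this demo */
}

/* Take the fast path only when every field is an atom and the arity is small;
 * otherwise fall back to the normal GOT-allocating MOV expansion. */
static bool atom_fast_path_ok(const Term *fields, uint32_t ari) {
  if (ari > ATOM_FAST_MAX_ARI) return false;
  for (uint32_t i = 0; i < ari; i++) {
    if (!is_atom(fields[i])) return false;
  }
  return true;
}

int main(void) {
  Term fields[3] = {2, 4, 6};  /* three "atoms" under the toy encoding */
  printf("fast path: %s\n", atom_fast_path_ok(fields, 3) ? "yes" : "no");
  return 0;
}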

4) Why MOV can be faster (and when)

MOV can reduce interactions when it avoids explicit duplication and avoids propagating duplication through large structures. Typical wins:

  • Branch‑local uses: when a value is consumed in each branch once and the runtime can preserve branch separation, MOV avoids building a DUP chain and avoids the associated admin work.
  • Value is large and structured: DUP causes propagation through structure; MOV can share the value and expand only when needed.
  • Atom‑heavy values: the safe‑atom fast path avoids GOT allocations altogether.

This is why the bounty program drops from ~97k interactions in the DUP encoding to ~24.8k in the MOV encoding.

5) Why MOV can be slower (and when)

MOV adds administrative overhead even when it is correct:

  1. Extra GOT work: MOV expands values by creating GOTs and redirecting through them. If a value is used only once, those GOTs are pure overhead.
  2. SAFE_MOV and LAMs: correctness requires producing a fresh lambda wrapper for each use of a MOV‑produced lambda. That can be more work than a DUP that propagates once.
  3. MOV interaction overhead: MOV‑NOD / MOV‑SUP / MOV‑RED / MOV‑DRY create extra heap traffic and indirections that can dominate small programs.

The Payor repro is a concrete case where MOV is correct but slower (22 vs 17 interactions). So “MOV is never slower” is not generally true without additional restrictions.

6) Why an “always‑faster and always‑correct MOV” is not achievable (in this runtime)

This is the conceptual boundary for the current MOV design (memoized sharing of potentially consumable structures like LAM). The key fact is that “used once per branch” is not a reduction invariant. During reduction, branch structure can vanish (e.g., same‑label SUP annihilation), collapsing the computation into a single residual term that simultaneously needs multiple branch‑indexed components. In such cases, sharing a single mutable binder slot becomes incorrect: the second use can overwrite the first. Correctness then forces duplication. Once duplication is forced, MOV cannot be universally faster than explicit DUP in this runtime without stronger primitives.

Both repros illustrate this:

  • Payor repro: reduction collapses branch structure, forcing two independent uses into one residual computation. Any memoized shared lambda slot becomes incorrect; correctness requires separation (duplication).
  • f3 repro: nested duplication yields 3 uses of a MOV binder. Without explicit duplication, linearity is violated. The runtime must introduce duplication to preserve semantics.

Conclusion: a “perfect” MOV that is always faster and always correct is not possible in this runtime without changing the calculus or adding new, stronger primitives.

7) Evidence (commands and outputs)

All commands were run using hvm4_static/clang/main, built with gcc -O2.

Correctness: f3 repro

./hvm4_static/clang/main hvm4_static/test/mov_bounty_f3.hvm4 -C1
./hvm4_static/clang/main hvm4_static/test/mov_bounty_f3_dup.hvm4 -C1

Output (both):

#O{#I{#E{}}}

Correctness: Payor repro

./hvm4_static/clang/main hvm4_static/test/payor_repro.hvm4 -C1
./hvm4_static/clang/main hvm4_static/test/payor_repro_dup.hvm4 -C1

Output (both):

λa.a

Unsafe mode reproduces the bug

With HVM4_SAFE_MOV=0, MOV‑LAM memoization is re‑enabled and the Payor repro diverges from the DUP result (unsafe behavior). This demonstrates that memoizing MOV‑produced lambdas is unsound in this runtime.

HVM4_SAFE_MOV=0 ./hvm4_static/clang/main hvm4_static/test/payor_repro.hvm4 -C1

Output:

λa.λb.λc.c

Performance: Full bounty program (MOV)

./hvm4_static/clang/main hvm4_static/test/mov_bounty.hvm4 -C1 -s

Output:

#O{#O{#O{#O{#I{#O{#E{}}}}}}}  #24805
- Itrs: 24805 interactions

Status: correct output, meets <50k target for the bounty program and the added repro/test set.

Control: Full bounty program (DUP)

./hvm4_static/clang/main hvm4_static/test/mov_bounty_dup.hvm4 -C1 -s

Output:

#O{#O{#O{#O{#I{#O{#E{}}}}}}}  #97885
- Itrs: 97885 interactions

Example where MOV is correct but slower

./mov_vs_dup_slow.sh

This runs payor_repro (MOV) and payor_repro_dup (DUP) with -s. MOV is correct but shows more interactions (22 vs 17).

MOV‑LAM microbench (SAFE_MOV on/off)

Files:

  • hvm4_static/test/mov_lam_bench.hvm4
  • hvm4_static/test/mov_lam_bench_dup.hvm4
  • mov_lam_bench.sh

Run:

./mov_lam_bench.sh

Observed:

  • MOV (SAFE_MOV=1 default): 366 interactions
  • MOV (SAFE_MOV=0, memoize MOV‑LAM): 363 interactions
  • DUP: 237 interactions

Conclusion: memoizing MOV‑LAM yields only a tiny improvement on this microbench and still trails DUP.

MOV‑LAM erase microbench

Files:

  • hvm4_static/test/mov_lam_erase_bench.hvm4
  • hvm4_static/test/mov_lam_erase_bench_dup.hvm4

Observed:

  • MOV (SAFE_MOV=1 default): 16 interactions
  • MOV (SAFE_MOV=0, memoize MOV‑LAM): 16 interactions
  • DUP: 17 interactions

Conclusion: memoization does not improve this case (already minimal).

8) Files changed

  • hvm4_static/clang/hvm4.c — SAFE_MOV flag and env override.
  • hvm4_static/clang/main.c — SAFE_MOV initialization.
  • hvm4_static/clang/wnf/mov_lam.c — skip MOV‑LAM memoization when SAFE_MOV is enabled.
  • hvm4_static/clang/parse/term/mov.c — auto‑dup for uses > 2.
  • hvm4_static/clang/wnf/_.c — dispatch aligned with upstream (no MOV‑under‑DUP special paths).
  • hvm4_static/clang/wnf/mov_nod.c, hvm4_static/clang/wnf/mov_sup.c — safe‑atom fast paths.
  • Test additions (bounty and repros): hvm4_static/test/mov_bounty.hvm4, hvm4_static/test/mov_bounty_dup.hvm4, hvm4_static/test/mov_bounty_f3.hvm4, hvm4_static/test/mov_bounty_f3_dup.hvm4, hvm4_static/test/payor_repro.hvm4, hvm4_static/test/payor_repro_dup.hvm4.
  • Additional test coverage and minimizers: hvm4_static/test/mov_bounty_min.hvm4, hvm4_static/test/mov_bounty_min_dup.hvm4, hvm4_static/test/mov_bounty_small.hvm4, hvm4_static/test/mov_bounty_small_dup.hvm4, hvm4_static/test/mov_bounty_tiny.hvm4, hvm4_static/test/mov_bounty_tiny_dup.hvm4, hvm4_static/test/mov_dup_func.hvm4, hvm4_static/test/mov_dup_nested.hvm4, hvm4_static/test/mov_dup_test.hvm4, hvm4_static/test/mov_erase_test.hvm4, hvm4_static/test/mov_erase_test_dup.hvm4, hvm4_static/test/mov_lam_bench.hvm4, hvm4_static/test/mov_lam_bench_dup.hvm4, hvm4_static/test/mov_lam_dupvar.hvm4, hvm4_static/test/mov_lam_erase_bench.hvm4, hvm4_static/test/mov_lam_erase_bench_dup.hvm4, hvm4_static/test/mov_lam_test.hvm4.
  • Scripts: mov_vs_dup_slow.sh, mov_lam_bench.sh.

9) Test suite run

./hvm4_static/test/_all_.sh

Outcome: PASS. The script reports clang: command not found but continues using the existing clang/main binary.

10) Letter vs spirit of the bounty

  • Letter: satisfied. MOV and DUP encodings match under -C1 for the bounty program and the added repro/test set, and MOV completes the bounty program in 24,805 interactions (<50k).
  • Spirit: satisfied in the sense that MOV remains a net win on the bounty program while correctness comes first. MOV can still be slower on some small programs (e.g., the Payor repro), which is an inherent limitation of this runtime’s representation and the need to avoid unsafe sharing. A more complete explanation of why a "perfect" MOV cannot exist is also provided, though it depends in part on Payor's example.

11) Remaining risks / follow‑ups

  • MOV can be slower in some cases; this is expected and explained above.
  • A universal “never slower” guarantee is not possible without changing the calculus or adding stronger primitives.
  • Further performance work should focus on safe, local optimizations (e.g., more atom fast paths) rather than memoizing MOV‑LAMs.

@HaileyStorm
Author

I'm hoping the silence so far is a tentative good sign :)

(Other bounty PRs quickly had a comment showing they were invalid.)


HaileyStorm commented Jan 7, 2026

Also, I just realized I left the mov_nod, mov_sup, and mov_red memoization guards in. I committed that change. There was no impact on the validity or interaction count of any of the included tests, but this should speed up some scenarios (and I don't think it will introduce problems the way mov_lam does).

@Lorenzobattistela
Collaborator

We're going to review that shortly, impressive work anyways!

@Lorenzobattistela
Collaborator

Hey @HaileyStorm, take a look at the following snippet:

@O = λp. λo. λi. λe. o(p)
@I = λp. λo. λi. λe. i(p)
@E =     λo. λi. λe. e
@N =     λc. λs. λn. n

@view = λxs.
  ! O = λp. #O{@view(p)}
  ! I = λp. #I{@view(p)}
  ! E = #E
  xs(O, I, E)

@rep2 = λ&f. λx. f(f(x))
@rep3 = λ&f. λx. f(f(f(x)))

@insert_mov = λn.
  ! O = &{}
  ! I = (λ&p. λxs. λ&o. λi. λe.
    % f = o
    ! O = λxs. f(@insert_mov(p, xs))
    ! I = λxs. i(@insert_mov(p, xs))
    ! E = f(@insert_mov(p, @N))
    xs(O, I, E))
  ! E = λxs. λo. λ&i. λe.
    % k = i
    ! O = λxs. k(xs)
    ! I = λxs. k(xs)
    ! E = k(@N)
    xs(O, I, E)
  n(O, I, E)

@insert_dup = λn.
  ! O = &{}
  ! I = (λ&p. λxs. λ&o. λi. λe.
    ! O = λxs. o(@insert_dup(p, xs))
    ! I = λxs. i(@insert_dup(p, xs))
    ! E = o(@insert_dup(p, @N))
    xs(O, I, E))
  ! E = λxs. λo. λ&i. λe.
    ! O = λxs. i(xs)
    ! I = λxs. i(xs)
    ! E = i(@N)
    xs(O, I, E)
  n(O, I, E)

@ins_mov = @insert_mov(@I(@I(@E)))
@ins_dup = @insert_dup(@I(@I(@E)))

@main = #T{
  @view(@rep2(@ins_mov, @O(@E))),  // OK:   #O{#O{#I{#E{}}}}
  @view(@rep3(@ins_mov, @O(@E))),  // FAIL: #O{λa.λb.a(...)}
  @view(@rep2(@ins_dup, @O(@E))),  // OK:   #O{#O{#I{#E{}}}}
  @view(@rep3(@ins_dup, @O(@E)))   // OK:   #O{#O{#I{#E{}}}}
}

I ran this on your branch, and got:

> ./main bug.hvm4 -s
#T{#O{#O{#I{#E{}}}},#O{λa.λb.A},#O{#O{#I{#E{}}}},#O{#O{#I{#E{}}}}};%A=A₀(λc.λd.λe.e);!A&F__e=a;
- Itrs: 1052 interactions
- Time: 0.000 seconds
- Perf: 10.85 M interactions/s

Note that the second result should be the same as all the others, but instead it is #O{λa.λb.A} with pending MOVs. Is this expected? Seems like a bug coming from MOV's sharing location.

@HaileyStorm
Author

Hey @Lorenzobattistela. It turns out it was silly of me to remove those memoization guards (see commit 9d07d3b). MOV-NOD/SUP/RED need them because they can wrap lambdas: whenever MOV memoizes a structure that contains lambdas, sharing that structure shares the inner binder slots. Note that MOV-RED wasn't exercised in this example.


Summary:

  • Re‑enabled SAFE_MOV memoization guards for MOV‑NOD/SUP/RED (in addition to MOV‑LAM, which was still in place). This prevents sharing outer structures that contain lambdas, which was the root cause of the new_issue mismatch.
  • Added new test: test/new_issue.hvm4 (expected output comment included).

Evidence:

  • SAFE_MOV default: new_issue now matches DUP output.
  • HVM4_SAFE_MOV=0 reproduces the mismatch (2nd tuple element becomes a lambda), confirming memoization is unsafe for NOD/SUP/RED when lambdas are nested.

Tests:

  • ./test/all.sh (all pass; the bounty program still completes at the same interaction count)


jamespayor commented Jan 10, 2026

I've felt confused about how dropping the "memoization" should work here, since in my picture the interaction calculus has a "use once" policy about terms, with its model sharing computation by doing destructive local reductions based on local context.

I believe that things are mostly working in this PR because we're usually encountering a GOT->MOV->ALO, and MOV-LAM is firing immediately after ALO emits a LAM. By skipping overwriting the loc with the new LAM, the code leaves the ALO in place to create a second LAM later on re-entry. (And re-entry does occur when things are DUP'd.)

So via re-allocating, these cases avoid a single LAM being used twice. But there is still potential for the reuse of variables that have been brought into scope (that are pointed to by DeBruijn indices underneath the ALO). And I think that means we can e.g. call a function in scope twice and observe clobbering.

I don't have a minimal example that exploits that. But in my attempts I did find this test case that I think breaks something:

@id = λa. a
@fst = λa. λb. a
@snd = λa. λb. b

@make = λss. λf.
  % s = ss
  f(s, s)

@movdup = λs. λf.
  !& x = @make(s)
  f(x(@fst), x(@snd))

@twice = λf. λa. @movdup(f, λf0. λf1. f0(f1(a)))

@main = @twice(@twice, @id)

// Should output: λa.a
// Actual output: λa.A₀(B₀(a));!A&F___=A;!B&F___=b;%A=C₁;!C&F___=c;


HaileyStorm commented Jan 10, 2026

@jamespayor This one was harder! You definitely uncovered some issues with the original approach. I do believe I've resolved this concern though (it is, of course, still not a "perfect MOV," but the goal remains always-correct and sometimes-faster + bounty program success ... at least that's my understanding / goal).


  1. Parser fallback (payor_new): the parser now detects MOV variables that occur more than once within the same lambda region and rewrites that MOV binder to cloned‑let: BJM -> BJV + auto‑dup + (λx. body) val. That forces explicit duplication exactly in the “call a function in scope twice” scenario you described. This fixes payor_new without globally forcing MOV->DUP.
  2. SAFE_MOV (runtime): SAFE_MOV disables memoization for MOV‑LAM/NOD/SUP/RED (default). That prevents sharing consumable binder slots across uses, including cases where a LAM is nested inside a structure. I also removed MOV‑LAM template caching (it caused a stack overflow on new_issue) and added explicit DUP‑MOV / DUP‑GOT interactions.

Key results:

  • payor_new / payor_new_dup (your counter-example in the comment above): λa.a
  • payor_repro / payor_repro_dup (your original counter-example shared elsewhere): λa.a
  • f3 MOV/DUP: #O{#I{#E{}}}
  • new_issue (@Lorenzobattistela 's counter-example): all four branches match in SAFE_MOV; unsafe mode still fails (expected)
  • bounty MOV: 24,805 itrs vs DUP: 97,885 itrs

Tests:

  • ./hvm4_static/test/all.sh (PASS; clang warnings only)
  • ./mov_vs_dup_slow.sh (MOV 18 vs DUP 17 itrs)
  • ./mov_lam_bench.sh (MOV/DUP all 237 itrs; erase bench all 17 itrs)

Additional tests I ran trying to find counter-examples:

  1. MOV used once in two different lambdas, each applied (should be risky if MOV‑LAM reuses the same binder slot):
  @main =
    % x = λa.a
    #T{
      (λu. x(u))(0n),
      (λv. x(v))(1n)
    }

Result: #T{0n,1n} (correct).

  2. MOV value is an application that reduces to a lambda (tests repeated evaluation of a shared redex):
  @main =
    % x = (λf. f)(λa.a)
    #T{
      (λu. x(u))(0n),
      (λv. x(v))(1n)
    }

Result: #T{0n,1n} (correct).

  3. MOV value is a constructor containing a lambda; both uses extract/apply it (tests inner LAM sharing):
  @proj = λp. (λ{#T: λf. f})(p)

  @main =
    % x = #T{λa.a}
    #T{
      @proj(x)(0n),
      (λu. @proj(x)(1n))(0n)
    }

Result: #T{0n,1n} (correct).

  4. DUP of a lambda that captures a MOV var (tests DUP/MOV interaction):
  @main =
    % x = λa.a
    !& d = λu. x(u)
    #T{ d₀(0n), d₁(1n) }

Result: #T{0n,1n} (correct).

@Lorenzobattistela
Collaborator

@HaileyStorm indeed you got some surprising results / solved the specific buggy cases presented. However, as you said, it's not a perfect general MOV. This means that despite optimizing the specific programs, you can't rely on this in general for fusion. The problem with this is that you cannot rely on it for example to implement SupGen (which is currently the code that would benefit a lot from fusion for composition).


HaileyStorm commented Jan 12, 2026

That's definitely fair (re SupGen particularly).
As I understand things, I would at this point probably remove MOV (heh)... or (rename it?) and leave it in for special cases and not use it for SupGen.
I am NOT going to make a stink.
But, I will this one (more) time make the point that I did meet the letter of the bounty as stated, and I did contribute to the understanding of why exactly it can't work as intended (contributed to the second case for the bounty that is).
Given that, I would greatly appreciate if Victor/HOC would be willing to compensate something for the result.
I'm not going to mark this closed myself in case y'all do want to keep the code for those edge cases (not that you couldn't just reopen or whatever). But I'm definitely OK with / understand closing the PR at this point.
I would definitely recommend Victor officially close the bounty at this point.
Thank you for the interesting experience and engagement with my ~solution regardless!

@VictorTaelin
Member

Hey! Sorry I definitely disagree. The bounty was clear that the solution had to be correct in general, not just in this case:

your solution must be correct, in the sense that using either MOV nodes and DUP nodes to pass a value to different branches produces the same output

I specifically put this 10k bounty to a correct implementation of MOV. If the implementation fails in general (which seems to be the case here?) then it is not correct and it sounds a bit unfair to demand a compensation for it.

I'm not closing this bounty because I'm absolutely interested in a solution and I still think it could be possible.

Now, just to be clear (I'm a bit out of time right now so I can't read it all), my understanding is that there are common / sensible functions where MOV should work, yet this PR's solution fails, right? Also, does it work on the tests in the wip/ directory on the main branch? @Lorenzobattistela

@HaileyStorm
Author

Hey @VictorTaelin , thanks for the input and time. To be clear, I'm definitely not demanding. I want to stay friendly and I love this project.
I am not aware of any tests that currently fail*. It passes the all.sh test and produces output identical to DUP for all provided examples. Since the original submission, two new counter-examples were found: the first was pretty trivial, the second required a larger change.
I frankly don't understand enough, but I tend to trust @Lorenzobattistela 's assessment that it's not usable for SupGen. I could be wrong about that.

*I actually didn't notice the wip directory 😆. I wanted to reply ASAP and state that no counter-example has been provided or discovered by me that hasn't since been resolved, so the current code is correct. I haven't run the wip folder tests and will do that as soon as I can and update.

@Lorenzobattistela
Collaborator

@HaileyStorm the solution to Payor's counter example, the one you replied with:

Parser fallback (payor_new): the parser now detects MOV variables that occur more than once within the same lambda region and rewrites that MOV binder to cloned‑let: BJM -> BJV + auto‑dup + (λx. body) val. That forces explicit duplication exactly in the “call a function in scope twice” scenario you described. This fixes payor_new without globally forcing MOV->DUP.

Can you point in the term what happens after your parser fix?

@id = λa. a
@fst = λa. λb. a
@snd = λa. λb. b

@make = λss. λf.
  % s = ss
  f(s, s)

@movdup = λs. λf.
  !& x = @make(s)
  f(x(@fst), x(@snd))

@twice = λf. λa. @movdup(f, λf0. λf1. f0(f1(a)))

@main = @twice(@twice, @id)

As far as I understand your MOV implementation, you either choose to run with SAFE_MOV or UNSAFE. (Also, is the 24k result with safe or unsafe mode?)

My argument that it is not general enough for SupGen is that the implementation will contain unsafe things (like the example I sent, the one that works in safe mode but not in unsafe), which would make us lose most of the optimization, right? (I could be wrong)

@HaileyStorm
Author

Thanks for the clarification @Lorenzobattistela.
I'm working to understand it and provide a more complete answer as to whether the parser fix is valid / ruins viability for SupGen, but for the moment:

  • SAFE_MOV is on by default and used for all tests except those that explicitly set it off. The toggle is left in as an option to demonstrate where disabling memoization solves problems.
  • The 24k bounty result was with SAFE_MOV on.
  • I believe the memoization fix is valid/usable for SupGen, but it alone does not cover the payor_new scenario (it does cover the original bounty program / is sufficiently general).

@VictorTaelin
Member

@HaileyStorm the WIP dir is new, but it just includes some basic tests that any correct implementation of MOV should pass. If your proposal passes these tests, then it might be correct and I'll take the time to review it. Just let me know.

@HaileyStorm
Author

Well, I really, really tried, but...
I see that the parser rewrite (and its equivalent in the runtime) are workarounds, and I've been unable to fix it.
